
Self-Driving Uber


Backup driver for self-driving Uber that killed Arizona pedestrian pleads guilty

The Guardian

The backup Uber driver for a self-driving vehicle that killed a pedestrian in suburban Phoenix in 2018 pleaded guilty Friday to endangerment in the first deadly crash involving a fully autonomous car. Arizona state judge David Garbarino, who accepted the plea agreement, sentenced Rafaela Vasquez to three years of supervised probation for the crash that killed 49-year-old Elaine Herzberg. Vasquez, 49, told police that Herzberg "came out of nowhere" and that she didn't see Herzberg before hitting her on a darkened Tempe street on 18 March 2018. Vasquez had been charged with felony negligent homicide. The charge to which she pleaded could be reclassified as a misdemeanor if she completes probation. Authorities say Vasquez was streaming the television show The Voice on a phone and looking down in the moments before Uber's Volvo XC90 SUV struck Herzberg, who was crossing with her bicycle.


Can a former model predict your future? A million Turkish users say yes

#artificialintelligence

Sertaç Taşdelen, a Turkish entrepreneur and creator of the fortune-telling app Faladdin, does his best to resemble Aladdin's genie. When I recently visited his coworking office in downtown Istanbul, Taşdelen was wearing an electric blue jacket, white trousers, and a fluffy button-down shirt, and his bearded, square-jawed face carried the mischievous smile of the fictional jinn. "Faladdin is my alter ego," Taşdelen said of the psychic he plays in the app. "If I quit business today," he whispered, leaning in, "I'd be a gypsy fortune-teller living in a caravan." In Apple's App Store, Faladdin describes itself as "far beyond a fortune telling app." The description states that it can predict one's destiny "by evaluating a person's past." It does this by bringing the Turkish tradition of coffee fortune-telling into the Internet age.


Waymo admits the human driver was to blame for a crash that injured a motorcyclist

Daily Mail - Science & tech

A crash in which a self-driving car collided with and injured a motorcyclist was caused by the human back-up driver. Waymo, the autonomous car division of Google's parent firm Alphabet, revealed the human driver took control of the vehicle before crashing last month. According to Waymo's simulations after the accident, the car would have slowed down and avoided a collision if left to its own devices. Waymo has admitted the fault for the incident lies with the driver and not with its technology. The unfortunate incident occurred when the driver felt the need to take control of the Waymo minivan and merge into the outside lane from the centre lane on the highway.


Self-Driving Uber That Hit Pedestrian Wasn't Set to Stop in an Emergency

WSJ.com: WSJD - Technology

The crash marked the first pedestrian death involving a self-driving car and ignited a broader discussion about whether the driverless technology that auto and tech companies are racing to develop is ready for the real world. It also illustrates the challenges Uber has faced in developing software that can detect hazards on the road and respond appropriately, as the ride-hailing company chases rivals such as Alphabet Inc.'s Waymo and General Motors Co.'s Cruise Automation, which aim to deploy robot taxis that could pose a threat to Uber's business. Uber was testing a fleet of Volvo Cars sport-utility vehicles that come equipped with automatic emergency braking and other safety features. The vehicles, however, were modified by the ride-hailing company, which equipped them with cameras, sensors and onboard computers. An operator rides in each vehicle, prepared to take the wheel to ensure safety as needed.


NTSB report says self-driving Uber saw pedestrian 6 seconds before deadly crash

FOX News

Raw video: Cameras mounted inside the car catch the fatal moment. Authorities are investigating the cause of the crash. The self-driving Uber SUV that struck and killed Elaine Herzberg in Tempe, Ariz., in March picked her up on its sensors six seconds before it hit her, but did not determine that it needed to stop or evade her until it was too late, according to federal investigators. Herzberg was jaywalking her bicycle across a four-lane section of road on the night of March 18 when the Volvo XC90 SUV ran into her. A preliminary report on the accident from the National Transportation Safety Board issued on Thursday said that a review of the data from the car shows that it first identified her as an unknown object, then as a vehicle and finally as a bicycle.


Self-Driving Uber In Fatal Accident Had 6 Seconds To React Before Crash

Huffington Post - Tech news and opinion

The NTSB report also noted the pedestrian tested positive for methamphetamine and marijuana. That information is of limited use (the Uber should have identified and avoided her either way), but it does potentially provide insight into why she didn't cross at the crosswalk 360 feet to the north, and why she was unaware of the vehicle until immediately before impact.


We Now Know Why the Self-Driving Uber That Killed a Pedestrian Didn't Brake

Slate

Uber's self-driving vehicles operating in Arizona were unprepared to safely encounter pedestrians and were fatally over-reliant on the mindfulness of human operators, a federal accident report released Thursday shows. On March 18, Uber's Volvo XC90 was being driven by software but supervised by a human attendant in the driver's seat when it hit and killed Elaine Herzberg, who was crossing the darkened road with her bicycle. It was the first fatal crash involving a vehicle driven by a computer, a technology that promises long-term safety improvements but has been rushed into road testing by a handful of companies despite questions about transparency and reliability. According to the preliminary report of the National Transportation Safety Board, Uber's sensors first perceived Herzberg about six seconds before impact, more than twice the commonly accepted reaction time of 2.5 seconds. But the sensors struggled to classify Herzberg (first as an unknown object, then as a car, then as a bicycle) and determine her expected path across the road.


Emergency brake was disabled on self-driving Uber that killed woman

The Guardian

A federal investigation into a self-driving Uber SUV that hit and killed a pedestrian in March has found that the vehicle's emergency braking system was disabled. The preliminary report, issued by the National Transportation Safety Board, said on Thursday that while the vehicle's guidance system had spotted the woman about six seconds before hitting her, emergency braking manoeuvres were not enabled in order to "reduce the potential for erratic vehicle behavior". Instead, the Uber system anticipated that the human back-up driver would intervene. However, the automated system was not designed to alert the driver of the impending danger. The car was traveling at 43 miles per hour and its sensors determined that braking was needed 1.3 seconds before impact, according to the report.


Self-Driving Uber 'Saw' Pedestrian but Did Not Brake Before Fatal Crash, Investigators Say

TIME - Tech

The autonomous Uber SUV that struck and killed an Arizona pedestrian in March spotted the woman about six seconds before hitting her, but did not stop because the system used to automatically apply brakes in potentially dangerous situations had been disabled, according to federal investigators. In a preliminary report on the crash, the National Transportation Safety Board said Thursday that emergency braking is not enabled while Uber's cars are under computer control, "to reduce the potential for erratic vehicle behavior." Instead, Uber relies on a human backup driver to intervene. The system, however, is not designed to alert the driver. In the crash, the driver began steering less than a second before impact but didn't brake until less than a second after impact, according to the preliminary report, which does not determine fault.


Self-driving cars are NOT safe 'while in the wild', says the co-founder of Google's DeepMind

Daily Mail - Science & tech

The co-founder of Google's DeepMind has slammed self-driving cars for not being safe enough, saying current early tests on public roads are irresponsible. Demis Hassabis has urged developers to be cautious with the new technology, saying it is difficult to prove systems are safe before putting them on public roads. The issue of AI in self-driving cars has flared up this year following the death of a woman hit by a self-driving Uber in March. The accident was the first time a pedestrian was killed on a public road by an autonomous car, which had previously been praised as the safer alternative to a traditional car. Speaking at the Royal Society in London, Dr Hassabis said current driverless car programmes could be putting people's lives in danger.